# Large Language Model

Seed-Coder
Seed-Coder is a family of open-source code large language models from ByteDance's Seed team. It includes base, instruction, and reasoning variants and aims to deliver strong programming capability with minimal human effort by letting the model itself curate and manage its code training data. The models perform strongly among open-source models of comparable size, cover a wide range of coding tasks, and are positioned to advance the open-source LLM ecosystem for both research and industrial use (a minimal loading sketch follows this entry).
large language model
39.7K
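
The following is an illustrative sketch only, not an official example: it loads a Seed-Coder checkpoint with Hugging Face Transformers and completes a code prompt. The repository ID used here is an assumption; check the Seed team's published model cards for the actual identifiers.

```python
# Illustrative sketch: complete a code prompt with a Seed-Coder base model via Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ByteDance-Seed/Seed-Coder-8B-Base"  # assumed repository ID; verify before use

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", trust_remote_code=True)

prompt = (
    "# Write a Python function that checks whether a string is a palindrome\n"
    "def is_palindrome(s: str) -> bool:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```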

ZeroSearch
ZeroSearch is a reinforcement learning framework that incentivizes the search capability of large language models (LLMs) without calling a real search engine. Supervised fine-tuning turns an LLM into a retrieval module that can generate both relevant and deliberately noisy documents, and a curriculum rollout mechanism gradually raises the difficulty of those rollouts to strengthen the policy model's reasoning. Its main advantages are performance that matches or exceeds training against real search engines, at zero API cost. It works with LLMs of various sizes and supports multiple reinforcement learning algorithms, making it well suited to research and development teams that need efficient retrieval capabilities (the curriculum idea is sketched after this entry).
search capability
39.7K
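
As a rough illustration of the curriculum rollout idea described above, and not ZeroSearch's actual implementation, the sketch below linearly increases the share of deliberately noisy simulated documents as training progresses; the function names and the schedule are assumptions.

```python
import random

def noise_prob(step: int, total_steps: int, start: float = 0.0, end: float = 0.5) -> float:
    """Curriculum schedule: the share of noisy documents grows linearly with training progress."""
    frac = min(step / max(total_steps, 1), 1.0)
    return start + frac * (end - start)

def simulated_search(query: str, step: int, total_steps: int, doc_llm) -> str:
    """Stand-in for the fine-tuned 'search engine' LLM: returns a relevant or a noisy document."""
    useful = random.random() >= noise_prob(step, total_steps)
    return doc_llm(query, useful)

# Toy document generator in place of the fine-tuned retrieval LLM.
def toy_doc_llm(query: str, useful: bool) -> str:
    return f"[{'relevant' if useful else 'irrelevant'} document for: {query}]"

for step in (0, 500, 1000):
    print(step, simulated_search("capital of France", step, 1000, toy_doc_llm))
```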

InternVL2-8B-MPO
InternVL2-8B-MPO is a multimodal large language model (MLLM) that improves multimodal reasoning by adding a Mixed Preference Optimization (MPO) stage. An automated pipeline constructs preference data, producing MMPR, a large-scale multimodal reasoning preference dataset. Fine-tuned from InternVL2-8B on MMPR, the model shows stronger multimodal reasoning with fewer hallucinations: it reaches 67.0% accuracy on MathVista, 8.7 points above InternVL2-8B and close to the much larger InternVL2-76B (a schematic preference-loss sketch follows this entry).
AI Model
47.5K
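
Mixed Preference Optimization reportedly combines several objectives; purely as a schematic illustration, and not the actual MPO loss, the sketch below shows a DPO-style preference term over chosen/rejected responses with toy log-probabilities.

```python
import torch
import torch.nn.functional as F

def preference_term(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO-style preference loss: favor the chosen response relative to a frozen reference model."""
    policy_margin = policy_chosen - policy_rejected
    ref_margin = ref_chosen - ref_rejected
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy summed log-probabilities for one chosen/rejected pair.
loss = preference_term(torch.tensor([-12.0]), torch.tensor([-15.0]),
                       torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss.item())
```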

InternVL 2.5
InternVL 2.5 is an advanced multimodal large language model series built on InternVL 2.0. It keeps the core model architecture while introducing significant improvements in training and testing strategies and in data quality. The series explores the relationship between model scale and performance, systematically examining trends across visual encoders, language models, dataset sizes, and test-time settings. Comprehensive evaluations on a wide range of benchmarks, covering interdisciplinary reasoning, document understanding, multi-image/video comprehension, real-world understanding, multimodal hallucination detection, visual grounding, multilingual capability, and pure language tasks, show that InternVL 2.5 is competitive with leading commercial models such as GPT-4o and Claude-3.5-Sonnet. Notably, it is the first open-source MLLM to exceed 70% on the MMMU benchmark, gaining 3.7 percentage points from Chain-of-Thought (CoT) reasoning and showing strong potential for test-time scaling.
AI Model
56.0K

Llama 3.1 Nemotron 70B Instruct
Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses. It scores highly on automatic alignment benchmarks such as Arena Hard, AlpacaEval 2 LC, and GPT-4-Turbo MT-Bench. It was trained with RLHF (specifically the REINFORCE algorithm) on top of Llama-3.1-70B-Instruct, using Llama-3.1-Nemotron-70B-Reward as the reward model and HelpSteer2-Preference prompts. Beyond showcasing NVIDIA's progress in improving helpfulness on general-domain instructions, the release provides a checkpoint converted for the Hugging Face Transformers library, and free hosted inference is available through NVIDIA's build platform (a schematic REINFORCE sketch follows this entry).
AI Model
54.4K
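
The entry mentions RLHF with the REINFORCE algorithm; the sketch below is only a schematic reminder of the REINFORCE objective (reward-weighted log-probabilities with a simple mean baseline), not NVIDIA's training recipe.

```python
import torch

def reinforce_loss(seq_logprobs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Scale each sampled response's log-probability by its baseline-subtracted reward,
    then minimize the negative expectation (simple mean baseline; real recipes vary)."""
    advantages = rewards - rewards.mean()
    return -(advantages.detach() * seq_logprobs).mean()

# Toy batch: summed log-probs of four sampled responses and their reward-model scores.
seq_logprobs = torch.tensor([-20.0, -18.5, -22.0, -19.0], requires_grad=True)
rewards = torch.tensor([0.7, 0.9, 0.2, 0.5])

loss = reinforce_loss(seq_logprobs, rewards)
loss.backward()
print(loss.item(), seq_logprobs.grad)
```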

Mistral Nemo Base 2407
Mistral-Nemo-Base-2407 is a 12B-parameter large language model pre-trained jointly by Mistral AI and NVIDIA. Trained on multilingual and code data, it significantly outperforms existing models of the same or smaller size. Key features: released under the Apache 2.0 license, available in both pre-trained and instruction-tuned versions, trained with a 128k context window, trained on multilingual and code data, and intended as a drop-in replacement for Mistral 7B. The architecture uses 40 layers, a model dimension of 5120, a head dimension of 128, a feed-forward hidden dimension of 14,336, 32 attention heads, 8 KV heads (GQA), a vocabulary of roughly 128k tokens, and rotary embeddings (θ = 1M). The model performs well on benchmarks such as HellaSwag, Winogrande, and OpenBookQA (the hyperparameters are collected into a config sketch after this entry).
AI Model
56.3K
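
For reference, the architecture numbers listed above gathered into a plain config dict; the field names are ours for illustration, not Mistral's official configuration keys.

```python
# Hyperparameters restated from the entry above (field names are illustrative).
MISTRAL_NEMO_BASE_2407 = {
    "num_layers": 40,
    "model_dim": 5120,
    "head_dim": 128,
    "hidden_dim": 14336,        # feed-forward dimension
    "num_attention_heads": 32,
    "num_kv_heads": 8,          # grouped-query attention (GQA)
    "vocab_size": 128_000,      # "about 128k" per the entry; the exact value may differ
    "rope_theta": 1_000_000,    # rotary embeddings, θ = 1M
    "context_window": 128_000,
}
```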

Llama 3 70B Tool Use
Llama-3-70B-Tool-Use is a 70B-parameter large language model optimized for advanced tool use and function calling. It reaches an overall accuracy of 90.76% on the Berkeley Function Calling Leaderboard (BFCL), outperforming all open-source 70B language models. The model is built on an optimized transformer architecture and fine-tuned from the Llama 3 70B base model with Direct Preference Optimization (DPO). It takes text as input and produces text as output, with enhanced tool-use and function-calling capabilities. Its main use case is tool use and function calling; for general knowledge or open-ended tasks, a general-purpose language model may be more appropriate. The model can produce inaccurate or biased content in some cases, and users should put safety measures in place that suit their specific use case. It is also highly sensitive to temperature and top_p sampling settings, which need to be tuned for best results (a generic tool-call dispatch sketch follows this entry).
AI Model
49.1K
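
The exact tool-call format this model emits depends on its chat template, so the sketch below shows only a generic pattern: parsing a JSON tool call and dispatching it to a registered Python function, plus placeholder sampling settings (the temperature/top_p values are not an official recommendation).

```python
import json

# Toy tool registry; the tool name and signature here are made up for illustration.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(raw: str) -> str:
    """Parse a model-emitted JSON call of the form {"name": ..., "arguments": {...}} and run it."""
    call = json.loads(raw)
    return TOOLS[call["name"]](**call.get("arguments", {}))

print(dispatch_tool_call('{"name": "get_weather", "arguments": {"city": "Berlin"}}'))

# The entry stresses sensitivity to sampling settings; these are placeholder starting values.
GENERATION_KWARGS = {"temperature": 0.2, "top_p": 0.9}
```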

OpenCompass 2.0 Large Language Model Leaderboard
OpenCompass 2.0 is a platform dedicated to evaluating the performance of large language models. It uses multiple closed-source datasets for multi-dimensional assessment, giving each model an overall average score along with specialized skill scores. Through its continuously updated leaderboard, the platform helps developers and researchers understand how different models perform in areas such as language, knowledge, reasoning, mathematics, and programming (a toy scoring example follows this entry).
AI Model Evaluation
62.4K
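
Purely to make the scoring idea concrete: a toy example of averaging per-dimension scores into an overall score. The dimension names follow the entry above; the numbers are invented.

```python
# Toy per-dimension scores for one model (invented numbers).
scores = {
    "language": 72.4,
    "knowledge": 68.1,
    "reasoning": 70.3,
    "mathematics": 61.9,
    "programming": 66.0,
}
overall = sum(scores.values()) / len(scores)
print(f"overall average: {overall:.1f}")
```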

## Featured AI Tools

Flow AI
Flow is an AI-driven filmmaking tool designed for creators. Built on Google DeepMind's advanced models, it lets users easily create compelling movie clips, scenes, and stories, offering a seamless creative experience with support for both user-supplied assets and content generated within Flow. Pricing is tied to the Google AI Pro and Google AI Ultra plans, which offer different feature sets for different user needs.
Video Production
43.1K

NoCode
NoCode is a platform that requires no programming experience, allowing users to quickly generate applications by describing their ideas in natural language, aiming to lower development barriers so more people can realize their ideas. The platform provides real-time previews and one-click deployment features, making it very suitable for non-technical users to turn their ideas into reality.
Development Platform
44.7K

ListenHub
ListenHub is a lightweight AI podcast generation tool that supports both Chinese and English. Based on cutting-edge AI technology, it can quickly generate podcast content of interest to users. Its main advantages include natural dialogue and ultra-realistic voice effects, allowing users to enjoy high-quality auditory experiences anytime and anywhere. ListenHub not only improves the speed of content generation but also offers compatibility with mobile devices, making it convenient for users to use in different settings. The product is positioned as an efficient information acquisition tool, suitable for the needs of a wide range of listeners.
AI
42.5K

MiniMax Agent
MiniMax Agent is an intelligent AI companion built on the latest multimodal technology. Its MCP-based multi-agent collaboration lets a team of AI agents solve complex problems efficiently. It offers instant answers, visual analysis, and voice interaction, and can boost productivity tenfold.
Multimodal technology
43.1K

Tencent Hunyuan Image 2.0
Tencent Hunyuan Image 2.0 is Tencent's latest AI image generation model, with significant improvements in generation speed and image quality. Thanks to an ultra-high-compression codec and a new diffusion architecture, images are generated in milliseconds, removing the wait typical of conventional generators. The model also combines reinforcement learning with human aesthetic knowledge to improve realism and detail, making it well suited to professional users such as designers and creators.
Image Generation
42.2K

OpenMemory MCP
OpenMemory is an open-source personal memory layer that provides private, portable memory management for large language models (LLMs). It ensures users have full control over their data, maintaining its security when building AI applications. This project supports Docker, Python, and Node.js, making it suitable for developers seeking personalized AI experiences. OpenMemory is particularly suited for users who wish to use AI without revealing personal information.
open source
42.8K

FastVLM
FastVLM is an efficient visual encoding model designed specifically for visual language models. Its innovative FastViTHD hybrid visual encoder reduces both the time needed to encode high-resolution images and the number of output tokens, giving it excellent speed and accuracy. FastVLM is positioned to give developers powerful visual language processing capabilities across a range of scenarios, and it performs especially well on mobile devices that require fast responses.
Image Processing
41.7K

LiblibAI
LiblibAI is a leading Chinese AI creative platform offering powerful AI creative tools to help creators bring their imagination to life. The platform provides a vast library of free AI creative models, allowing users to search and utilize these models for image, text, and audio creations. Users can also train their own AI models on the platform. Focused on the diverse needs of creators, LiblibAI is committed to creating inclusive conditions and serving the creative industry, ensuring that everyone can enjoy the joy of creation.
AI Model
6.9M